Concentration Bounds. Lecturer: Sushant Sachdeva. Scribe: Cyril Zhang

Authors

  • Sushant Sachdeva
  • Cyril Zhang
Abstract

A way to think about what this proof is doing: X dominates the scaled indicator variable T = t · 1[X ≥ t], so we have E[X] ≥ E[T] = t · Pr[X ≥ t]. Markov's inequality gives a rather weak bound when applied directly; the random variables we care about are usually much more highly concentrated. Let's look at a toy example: flip 100 fair coins. What's the probability that at least 70 of them come up heads? With X the number of heads, E[X] = 50, so Markov's inequality only tells us that Pr[X ≥ 70] ≤ 50/70 = 5/7. As we'll see, we can do much better.
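A quick sketch of the gap in the coin-flip example, comparing the Markov bound against the exact binomial tail (variable names here are my own, not from the notes):

```python
import math

n, t = 100, 70   # 100 fair coin flips; threshold of 70 heads
mean = n / 2     # E[X] = 50 for X = number of heads

# Markov's inequality: Pr[X >= t] <= E[X] / t = 50/70 = 5/7
markov_bound = mean / t

# Exact tail probability Pr[X >= 70] from the Binomial(100, 1/2) distribution
exact = sum(math.comb(n, k) for k in range(t, n + 1)) / 2 ** n

print(markov_bound)  # ~0.714
print(exact)         # orders of magnitude smaller
```

The exact tail is on the order of 10^-5, which illustrates why tools sharper than Markov's inequality (Chebyshev, Chernoff) are worth the extra work.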


Similar resources

Lecturer: David P. Williamson. Scribe: Faisal Alkaabneh

Today we will look at a matrix analog of the standard scalar Chernoff bounds. This matrix analog will be used in the next lecture, when we talk about graph sparsification. While we're more interested in the application of the theorem than its proof, it's still useful to see the similarities and the differences in moving from the proof of the result for scalars to the same result for matrices. Sc...


Matrix Inversion Is As Easy As Exponentiation

Abstract. We prove that the inverse of a positive-definite matrix can be approximated by a weighted sum of a small number of matrix exponentials. Combining this with a previous result [6], we establish an equivalence between matrix inversion and exponentiation up to polylogarithmic factors. In particular, this connection justifies the use of Laplacian solvers for designing fast semi-definite pr...


Publication date: 2015